Notes (Week 1 Wednesday)
These are the notes I wrote during the lecture.
We were building something called *first-order predicate logic*, and we were building it in layers.

First (bottom-most) layer: terms
- variables
- constants
- function symbols applied to other terms

Example terms:

  x + 5
  x + (y + 3)
  y ∪ z        <-- could also be a term, depending on whether ∪ is a function symbol under consideration or not

Second layer: simple formulae

A simple formula consists of a *predicate symbol* that makes a truth claim about a number of terms (possibly 0 terms).

Examples of predicate symbols:

  ≤ (truth claim about numbers)
  = (truth claim about... things)
  ⊆ (truth claim about sets)

Examples of simple formulae:

  3 ≤ x + 2
  ∅ ⊆ X
  ...

Third layer: propositional formulae

Simple formulae composed using the standard logical operators:

  ∧ conjunction
  ∨ disjunction
  → implication
  ¬ negation

Examples of propositional formulae:

  x + 1 = 3 ∧ x ≥ 5
  x ≥ 5
  ...

Q: Does a formula have to be true?
A: No, it just has to be syntactically well-formed.

  x = x + 1    <-- is still a valid formula

Q: Did we dodge Russell's paradox by building things in layers like we did?
A: We sidestepped it. The paradox results from this term:

  {x | x ∉ x}

This term (and naive set comprehensions in general) allows us to build a set from a predicate about sets. If the "type" of a set comprehension is

  {x | P x} : Set

then P is a predicate on sets:

  P : Set -> Bool

To make Russell's paradox (and, in general, other similar strange self-referential loops), we need exactly this ability to mix layers, to get a term to talk about itself. But by making sure that terms can't be defined in terms of predicates, you can't even write this set comprehension.

if(guard) {
  true branch
} else {
  false branch
}

(You can also call the guard a condition if you prefer; either is fine.)

if l ≤ m < h:
  ...
else:
  ...   # what's true here?
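Before working out the answer on paper, we can check a candidate answer mechanically. This is a sketch of my own (not from the lecture); the function name is made up. Python's chained comparison l ≤ m < h desugars exactly to the conjunction we'd write in logic, so a brute-force search over small values will catch a slip in the negation:

```python
from itertools import product

def negated_guard_matches(lo=-4, hi=4):
    """Check that ¬(l ≤ m < h) agrees with (l > m ∨ m ≥ h) on a grid of test values."""
    for l, m, h in product(range(lo, hi + 1), repeat=3):
        # Python evaluates l <= m < h as (l <= m) and (m < h).
        if (not (l <= m < h)) != (l > m or m >= h):
            return False
    return True
```

A small grid is enough here because the equivalence only involves pairwise comparisons; if the two sides ever disagree, they disagree on small numbers too.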
A: The guard must be false.

  ¬(l ≤ m < h)
≡ (syntactic sugar)
  ¬(l ≤ m ∧ m < h)
≡ (de Morgan)
  ¬(l ≤ m) ∨ ¬(m < h)
≡ (facts of arithmetic)
  l > m ∨ m ≥ h

De Morgan's laws allow you to distribute negation over conjunction and disjunction:

  ¬(A ∧ B) ≡ ¬A ∨ ¬B
  ¬(A ∨ B) ≡ ¬A ∧ ¬B

If these programs are the same, then the program on the left should do thing2 iff the program on the right does. For the program on the left to do thing2, the following need to hold:

  l > m ∨ m ≥ h   (1)
  m < l           (2)

For the program on the right to do thing2, the following need to hold:

  l > m ∨ m ≥ h   (1)
  m < h           (3)

The programs are different if, whenever the outermost if guard is false (that is, l > m ∨ m ≥ h holds), one program gets to thing1 and the other gets to thing2. So the programs are different if the following is satisfiable:

  (m < l ∧ m ≥ h)
≡ (laws of arithmetic)
  (m < l ∧ h ≤ m)
≡ (commutativity of ∧)
  (h ≤ m ∧ m < l)
⇛ (laws of arithmetic)
  h < l

Conclusion: if h < l, then we can observe a difference between the programs.

Commutativity of ∧:

  A ∧ B ≡ B ∧ A

Q: Would the calculation have been easier if we introduced abbreviations for the various inequalities?
A: Maybe nicer to read, but we still have to use properties of ≤, <, ...

States:

  n + 1 = m    <-- this formula may or may not be true, depending on what state we're considering the formula in

A state is a mapping (function, if you prefer) from variable names to values. For example, if the state is something like

  n ↦ 3, m ↦ 23624727

then the formula is false. But in the state

  n ↦ 3, m ↦ 4

the formula is true. States are also called *valuations* or *environments*, but the word "state" is meant to be suggestive of programming.

Substitution:

  (x ≤ 5)[x := 7] = 7 ≤ 5
  (x ≤ 5)[y := 7] = x ≤ 5
  (∀x. x ≤ 5)[x := 3] = (∀x. x ≤ 5)

Substitution replaces all *free* occurrences, but leaves the bound occurrences untouched.

At a first glance, we might expect that this should happen:

  (∀x. 0 ≤ y)[y := x] = (∀x. 0 ≤ x)

..but that's an instance of *variable capture* (which we don't want):

  (∀x. 0 ≤ y)[y := x]
    ^binding  ^free
  (∀x. 0 ≤ x)
    ^binding  ^bound

Remember: we don't want the choices of placeholders to matter, so the above should behave the same as

  (∀z. 0 ≤ y)[y := x] = (∀z. 0 ≤ x)

Two solutions:
- Define *capture-avoiding* substitution: assume that the substitution automatically renames bound variables to prevent clashes when necessary. <-- we'll mostly do this

    (∀x. 0 ≤ y)[y := x] = (∀w. 0 ≤ x)

- Say that substitution is undefined when free and bound variables clash:

    (∀x. 0 ≤ y)[y := x] = (undefined)

Sometimes people write comments like this:

  x := 5   // sets x to 5

(comments that explain what the code *does*). We're instead going to write comments that say what is true when execution is here:

  x := 5
  // x is 5

  // y+1 is odd
  x := y + 1
  // x is odd

Using substitutions, we can "push comments upwards" in our code to work out what needs to be true earlier to make the later comment true.

Scenario: I have a vague recollection that I can swap two variables using an intermediate variable, but I only vaguely remember how. Start by writing a comment describing what I want to happen: swap x and y.

Convention: lower-case variable names denote program variables; upper-case variable names are placeholders for constants. In this case, think: x means "current value of x", X means "original value of x".

  // y = Y ∧ x = X
  temp := y
  // temp = Y ∧ x = X
  y := x
  // temp = Y ∧ y = X
  x := temp
  // x = Y ∧ y = X

What I did here was: write a swap procedure, *and* a proof that the swap procedure is correct, at the same time.

Idea: start from what we want to be true at the end, then work backwards to bring that closer to a statement about the starting state. If your comment isn't quite right yet, add more lines of code to make it more right.
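The annotated swap can also be run, with each comment turned into an assert. This is my own addition (the function name is made up); the upper-case locals play the role of the constant placeholders X and Y:

```python
def swap(x, y):
    # Snapshot the original values: upper-case names are the
    # constant placeholders from the annotations.
    X, Y = x, y
    # // y = Y ∧ x = X
    assert y == Y and x == X
    temp = y
    # // temp = Y ∧ x = X
    assert temp == Y and x == X
    y = x
    # // temp = Y ∧ y = X
    assert temp == Y and y == X
    x = temp
    # // x = Y ∧ y = X
    assert x == Y and y == X
    return x, y
```

The asserts check the annotations only for the particular inputs you run; the substitution-based calculation above is what shows they hold for *all* inputs.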
(This has a fancy name: program synthesis.)

  // y = Y ∧ x = X
  // y = Y ∧ y = X    <-- derived by pushing the comments below upwards; it isn't implied by the line above yet, so more code is needed
  temp := y
  // temp = Y ∧ y = X
  x := temp
  // x = Y ∧ y = X

Exponentiation program: compute B^E.

  // True (assuming 0^0 = 1)
  // 1 = B^0
  // 1 = B^(E-E)
  x := 1
  // x = B^(E-E) ∧ B = B
  b := B
  // x = B^(E-E) ∧ b = B
  e := E
  // x = B^(E-e) ∧ b = B
  while(e ≠ 0) {
    // x = B^(E-e) ∧ e ≠ 0
    // x*b = B*B^(E-e) ∧ b = B
    // x*b = B^(E-(e-1)) ∧ b = B
    x := x * b
    // x = B^(E-(e-1)) ∧ b = B
    e := e - 1
    // x = B^(E-e) ∧ b = B
  }
  // x = B^(E-e) ∧ b = B ∧ e = 0
  // x = B^E

Find something that *implies* x = B^E together with e = 0 (the negation of the loop guard).

Something about this approach that I'm a huge fan of:
- if we think about the execution of the program, the loop runs many times
- but in the proof, the loop only "runs" once
- the proof follows the *structure* of the code, not its execution

Q: We're also assuming that the loop terminates, right?
A: Yes! We didn't actually prove termination here. We proved something called *partial correctness*: *if* the program terminates, then it terminates in a state where the last comment is true. You would then have to separately prove termination.

A faster exponentiation program:

  x := 1
  b := B
  e := E
  while(e ≠ 0) {
    if(even(e)) {
      // B^E = x * (b*b)^(e/2)
      e := e / 2
      // B^E = x * (b*b)^e
      b := b * b
      // B^E = x * b^e
    } else {
      x := x * b
      e := e - 1
    }
    // B^E = x * b^e
  }
  // B^E = x * b^e ∧ e = 0
  // x = B^E

(We didn't have time to finish this proof during the lecture.)
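Both exponentiation programs transcribe directly into runnable Python, with the loop invariant checked by an assert on every iteration. This is my own transcription (the names `power` and `fast_power` are made up); it assumes E ≥ 0 and uses Python's convention that 0**0 == 1:

```python
def power(B, E):
    # Slow version. Invariant: x = B^(E-e) ∧ b = B.
    x = 1
    b = B
    e = E
    while e != 0:
        assert x == B ** (E - e) and b == B  # loop invariant
        x = x * b
        e = e - 1
    # x = B^(E-e) ∧ b = B ∧ e = 0, hence x = B^E
    assert x == B ** E
    return x

def fast_power(B, E):
    # Fast version. Invariant: B^E = x * b^e.
    x, b, e = 1, B, E
    while e != 0:
        assert B ** E == x * b ** e  # loop invariant
        if e % 2 == 0:
            e = e // 2
            b = b * b
        else:
            x = x * b
            e = e - 1
    # B^E = x * b^e ∧ e = 0, hence x = B^E
    return x
```

Note the contrast with the proof: at runtime the assert fires once per iteration, while in the proof the invariant is discharged once, following the structure of the code rather than its execution.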